
    Bots in Wikipedia: Unfolding their duties

    The success of crowdsourcing systems such as Wikipedia relies on people participating in these systems. In this research, we reveal the extent to which human and machine intelligence are combined to carry out semi-automated workflows for complex tasks. In Wikipedia, bots are used to realize this combination of human and machine intelligence. We provide an extensive overview of the edit types bots carry out, based on an analysis of 1,639 approved task requests. We classify the tasks by an action-object pair structure and reveal differences in their probability of occurrence depending on the work context investigated. In the context of community services, bots mainly create reports, whereas in the area of guidelines and policies bots are mostly responsible for adding templates to pages. Moreover, the analysis of existing bot tasks yielded insights into why Wikipedia's editor community uses bots and how it organizes machine tasks to provide a sustainable service. We conclude by discussing how these insights can lay the foundation for further research.
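    The action-object pair classification can be pictured with the minimal sketch below; the verb and object vocabularies, the sample task requests, and the frequency counting are hypothetical stand-ins, since the abstract does not publish the paper's actual coding scheme.

        from collections import Counter

        # Hypothetical coding step: map a bot task request to an (action, object) pair.
        # The action and object vocabularies are illustrative, not the paper's scheme.
        ACTIONS = {"create", "add", "remove", "update", "fix", "tag"}
        OBJECTS = {"report", "template", "category", "link", "page"}

        def classify_task(description):
            tokens = description.lower().split()
            action = next((t for t in tokens if t in ACTIONS), None)
            obj = next((t for t in tokens if t in OBJECTS), None)
            return (action, obj) if action and obj else None  # else: manual coding

        # Toy requests standing in for the 1,639 approved task requests analysed in the paper.
        requests = [
            "create weekly report of unreferenced articles",
            "add maintenance template to orphaned page",
            "add navigation template to policy page",
        ]

        pairs = Counter(p for p in map(classify_task, requests) if p)
        print(pairs.most_common())  # frequency of (action, object) pairs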

    Critical-Reflective Human-AI Collaboration: Exploring Computational Tools for Art Historical Image Retrieval

    Like other disciplines, the humanities explore how computational research approaches and tools can meaningfully contribute to scholarly knowledge production. We approach the design of computational tools through the analytical lens of 'human-AI collaboration.' However, there is no generalizable concept of what constitutes 'meaningful' human-AI collaboration. In terms of genuinely human competencies, we consider criticality and reflection to be guiding principles of scholarly knowledge production. Although (designing for) reflection is a recurring topic in CSCW and HCI discourses, it has not been a central concern in work on human-AI collaboration. We posit that integrating both concepts is a viable approach to supporting 'meaningful' human-AI collaboration in the humanities. Our research is thus guided by the question of how critical reflection can be enabled in human-AI collaboration. We address this question with a use case that centers on computer vision (CV) tools for art historical image retrieval. Specifically, we conducted a qualitative interview study with art historians and extended the interviews with a think-aloud software exploration. We observed and recorded our participants' interaction with a ready-to-use CV tool in a plausible research scenario. We found that critical reflection indeed constitutes a core prerequisite for 'meaningful' human-AI collaboration in humanities research contexts. However, we observed that critical reflection was not fully realized during interaction with the CV tool. We interpret this divergence as supporting our hypothesis that computational tools need to be intentionally designed so that they actively scaffold and support critical reflection during interaction. Based on our findings, we suggest four empirically grounded design implications for 'critical-reflective human-AI collaboration.'
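    For context, CV retrieval tools of the kind explored here typically rank a collection by visual similarity to a query image. The abstract does not describe the studied tool's internals, so the following sketch of embedding-based nearest-neighbour retrieval, including the corpus size and embedding dimension, is purely illustrative.

        import numpy as np

        # Illustrative embedding-based image retrieval: rank a corpus of precomputed
        # image embeddings by cosine similarity to a query embedding. All data here
        # is random stand-in data, not the tool or collection studied in the paper.
        def rank_by_cosine(query, corpus, k=10):
            query = query / np.linalg.norm(query)
            corpus = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
            scores = corpus @ query
            return np.argsort(scores)[::-1][:k]  # indices of the k most similar images

        rng = np.random.default_rng(0)
        corpus = rng.normal(size=(1000, 512))  # stand-in for precomputed image embeddings
        query = rng.normal(size=512)           # stand-in for the query image's embedding

        print(rank_by_cosine(query, corpus))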

    The Impact of Concept Representation in Interactive Concept Validation (ICV)

    Large-scale ideation has emerged as a promising way of obtaining large numbers of highly diverse ideas for a given challenge. However, due to the scale of these challenges, algorithmic support based on a computational understanding of the ideas is a crucial component of these systems. One promising solution is the use of knowledge graphs to provide meaning. A significant obstacle lies in word-sense disambiguation, which cannot be solved by automatic approaches. In previous work, we introduced Interactive Concept Validation (ICV) as an approach that enables ideators to disambiguate the terms used in their ideas. To test the impact of different ways of representing concepts (should we show images of concepts, or only explanatory texts?), we conducted experiments comparing three representations. The results show that while the impact on ideation metrics was marginal, time/click effort was lowest in the images-only condition, while data quality was highest in the condition showing both images and texts.
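    The interactive validation step can be illustrated with the minimal sketch below; the sense inventory, concept identifiers, and prompt format are hypothetical, since the abstract does not specify the underlying knowledge graph or interface.

        # Hypothetical ICV-style step: the system proposes candidate senses from a
        # knowledge graph and the ideator picks the intended one. The sense
        # inventory and identifiers below are illustrative placeholders.
        CANDIDATE_SENSES = {
            "glass": [
                ("ex:GlassMaterial", "glass (the material)"),
                ("ex:DrinkingGlass", "drinking glass (the container)"),
                ("ex:Glasses", "glasses (eyewear)"),
            ],
        }

        def validate_concept(term):
            senses = CANDIDATE_SENSES.get(term.lower())
            if not senses:
                return None  # no candidates: leave the term for manual annotation
            print(f"Which sense of '{term}' did you mean?")
            for i, (_, label) in enumerate(senses, start=1):
                print(f"  {i}. {label}")
            choice = int(input("> ")) - 1
            return senses[choice][0]  # knowledge-graph identifier chosen by the ideator

        # Example: the ideator confirms which 'glass' concept their idea refers to.
        print("Validated concept:", validate_concept("glass"))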

    Innovonto: An Enhanced Crowd Ideation Platform with Semantic Annotation (Hallway Test)

    Crowd ideation platforms provide a promising approach to supporting the idea generation process. Research has shown that presenting a set of similar or diverse ideas to users during ideation leads them to come up with more creative ideas. In this paper, we describe Innovonto, a crowd ideation platform that leverages semantic web technologies and human collaboration to identify similar and diverse ideas in order to enhance the creativity of generated ideas. To this end, the implemented approach first captures the conceptualization of users' ideas. Then, a matching system is employed to compute similarities between all ideas in near real time. Furthermore, this technical report outlines the results obtained from the evaluation of the Innovonto platform. The hallway study, conducted at our research group, allowed us to test each step of the Innovonto platform as well as the proposed approach to assessing similarities between ideas. In total, we received 20 ideas and 23 pieces of feedback from 9 users. The analysis of the results shows good performance of the Innovonto steps and confirms the findings of existing research.
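    One simple way to picture the matching step is as overlap between the concepts annotated on each idea. The sketch below uses Jaccard similarity over hypothetical concept sets; the abstract does not detail Innovonto's actual matching system, so the ideas, concepts, and scoring are assumptions for illustration only.

        from itertools import combinations

        # Illustrative idea matching via Jaccard overlap of annotated concepts.
        # The ideas and concept sets are made up, not data from the hallway study.
        ideas = {
            "idea-1": {"Sensor", "Energy", "Window"},
            "idea-2": {"Sensor", "Light", "Window"},
            "idea-3": {"Battery", "Energy"},
        }

        def jaccard(a, b):
            return len(a & b) / len(a | b) if a | b else 0.0

        similarities = {
            (i, j): jaccard(ideas[i], ideas[j]) for i, j in combinations(ideas, 2)
        }

        # High scores surface similar ideas; low scores surface diverse ideas.
        for pair, score in sorted(similarities.items(), key=lambda kv: kv[1], reverse=True):
            print(pair, round(score, 2))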

    Examining the Impact of Algorithm Awareness on Wikidata's Recommender System Recoin

    The global infrastructure of the Web, designed as an open and transparent system, has a significant impact on our society. However, algorithmic systems of corporate entities that neglect these principles have increasingly populated the Web. Typical representatives of these algorithmic systems are recommender systems, which influence our society both at the scale of global politics and in mundane shopping decisions. Recently, such recommender systems have come under critique for how they may strengthen existing biases or even generate new kinds of bias. To this end, designers and engineers are increasingly urged to make the functioning and purpose of recommender systems more transparent. Our research relates to the discourse of algorithm awareness, which reconsiders the role of algorithm visibility in interface design. We conducted online experiments with 105 participants recruited via MTurk using the recommender system Recoin, a gadget for Wikidata. In these experiments, we presented users with one of three different designs of Recoin's user interface, each exhibiting a varying degree of explainability and interactivity. Our findings include a positive correlation between comprehension of and trust in the algorithmic system in our interactive redesign. However, our results are not yet conclusive and suggest that the measures of comprehension, fairness, accuracy and trust are not yet exhaustive for the empirical study of algorithm awareness. Our qualitative insights provide a first indication for further measures. Our study participants, for example, were less concerned with the details of understanding an algorithmic calculation than with who or what is judging the result of the algorithm.